The application of natural language processing (NLP) to cancer pathology reports has focused on detecting cancer cases, largely ignoring precancerous cases. Improving the characterization of precancerous adenomas assists in developing diagnostic tests for early cancer detection and prevention, especially for colorectal cancer (CRC). Here we developed transformer-based deep neural network NLP models to perform CRC phenotyping, with the goal of extracting precancerous lesion attributes and distinguishing cancer and precancerous cases. We achieved a macro-F1 score of 0.914 for classifying patients into negative, non-advanced adenoma, advanced adenoma, and CRC. We further improved performance to 0.923 using an ensemble of classifiers for cancer status classification and lesion size named entity recognition (NER). Our results demonstrate the potential of using NLP to leverage real-world health record data to facilitate the development of diagnostic tests for early cancer prevention.
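As a concrete illustration of how the two ensemble outputs might be combined, the toy function below maps a document-level cancer-status prediction and NER-extracted lesion sizes to the four patient classes. The 10 mm cutoff is a common clinical convention for advanced adenoma used here purely for illustration; the paper's actual decision logic is not reproduced.

```python
def phenotype(cancer_status, adenoma_sizes_mm):
    """Map report-level model outputs to a patient-level class.

    cancer_status: output of the cancer-status classifier
        ("cancer" or "no_cancer" -- hypothetical label names).
    adenoma_sizes_mm: lesion sizes recovered by the size NER model.

    The 10 mm advanced-adenoma cutoff is an illustrative clinical
    convention, not a value taken from the paper.
    """
    if cancer_status == "cancer":
        return "CRC"
    if not adenoma_sizes_mm:
        return "negative"
    if max(adenoma_sizes_mm) >= 10.0:
        return "advanced_adenoma"
    return "non_advanced_adenoma"
```

A rule of this shape lets the size NER refine the coarser document classifier, which is one plausible reading of why the ensemble outperforms the four-way classifier alone.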
Decentralized and federated learning algorithms face data heterogeneity as one of their biggest challenges, especially when users want to learn specific tasks. Even when personalized headers are concatenated to a shared network (PF-MTL), aggregating all the networks with a decentralized algorithm can degrade performance as a result of data heterogeneity. We propose an algorithm that uses the exchanged gradients to automatically compute the correlations among tasks, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other. This algorithm improves learning performance and leads to faster convergence compared to the case where all clients are connected to each other regardless of their correlations. We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset. The experiment with the synthetic data illustrates that our proposed method is capable of detecting tasks that are positively and negatively correlated. Moreover, the results of the experiments with CelebA demonstrate that the proposed method may produce significantly faster training results than fully-connected networks.
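A minimal sketch of the gradient-correlation idea: compute pairwise cosine similarity between the clients' flattened shared-layer gradients, then threshold it into a communication graph. The thresholding rule is an illustrative stand-in for the paper's dynamic graph-adjustment step, not its exact mechanism.

```python
import numpy as np

def correlation_graph(grads, threshold=0.0):
    """Build a communication graph from clients' exchanged gradients.

    grads: (n_clients, dim) array of flattened shared-layer gradients.
    Clients whose gradients have cosine similarity above `threshold`
    are connected; negatively correlated tasks are left isolated.
    """
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = g @ g.T                       # pairwise cosine similarities
    adj = (sim > threshold).astype(int) # keep only beneficial links
    np.fill_diagonal(adj, 0)            # no self-edges
    return adj
```

With two aligned gradients and one opposing gradient, the aligned pair ends up connected while the opposing client is cut off, which is the detection behavior the synthetic-Gaussian experiment illustrates.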
Multi-task learning (MTL) is a learning paradigm in which multiple related tasks are learned simultaneously with a single shared network, where each task has a distinct personalized header network for fine-tuning. MTL can be integrated into a federated learning (FL) setting if tasks are distributed across clients and the clients share a single network, leading to personalized federated learning (PFL). To cope with statistical heterogeneity across clients in the federated setting, which can significantly degrade learning performance, we use a distributed dynamic weighting approach. To perform the communication between the remote parameter server (PS) and the clients efficiently over a noisy channel in a power- and bandwidth-limited regime, we utilize over-the-air (OTA) aggregation and hierarchical federated learning (HFL). Thus, we propose hierarchical over-the-air (HOTA) PFL with a dynamic weighting strategy, which we call HOTA-FedGradNorm. Our algorithm considers the channel conditions during the dynamic weight selection process. We conduct experiments on a wireless communication system dataset (RadComDynamic). The experimental results demonstrate that training with HOTA-FedGradNorm is faster than with algorithms that use a naive static equal weighting strategy. In addition, HOTA-FedGradNorm provides robustness against negative channel effects by compensating for the channel conditions during the dynamic weight selection process.
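The dynamic weighting can be sketched in the spirit of GradNorm: tasks whose losses decay slowly are up-weighted so that per-task gradient norms move toward a common target. This toy step omits the channel compensation that distinguishes HOTA-FedGradNorm; it is a sketch of the baseline weighting idea only, and the normalization choice is an assumption.

```python
import numpy as np

def gradnorm_weights(grad_norms, loss_ratios, alpha=1.0):
    """One illustrative dynamic-weighting step in the spirit of GradNorm.

    grad_norms:  per-task gradient norms w.r.t. the shared layers.
    loss_ratios: L_i(t) / L_i(0), a proxy for each task's inverse
                 training rate (slow tasks have large ratios).
    """
    grad_norms = np.asarray(grad_norms, dtype=float)
    rel = np.asarray(loss_ratios, dtype=float)
    rel = rel / rel.mean()                 # relative inverse training rate
    target = grad_norms.mean() * rel**alpha
    w = target / grad_norms                # push each norm toward its target
    return w * len(w) / w.sum()            # weights sum to the task count
```

A task with a larger current gradient norm but the same loss ratio receives a smaller weight, which is the balancing behavior a static equal weighting cannot provide.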
Transformer models have achieved great success across many NLP problems. However, previous studies in automated ICD coding concluded that these models fail to outperform some of the earlier solutions such as CNN-based models. In this paper we challenge this conclusion. We present a simple and scalable method to process long text with the existing transformer models such as BERT. We show that this method significantly improves the previous results reported for transformer models in ICD coding, and is able to outperform one of the prominent CNN-based methods.
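One common way to make a fixed-length transformer such as BERT handle long clinical documents is to split the token sequence into overlapping windows and pool the per-window label scores. The sketch below illustrates that idea; the window size, stride, and max-pooling aggregation are generic assumptions, not necessarily the paper's exact scheme.

```python
def chunk_tokens(tokens, max_len=512, stride=256):
    """Split a long token sequence into overlapping windows that each
    fit a standard transformer's input limit."""
    chunks, start = [], 0
    while True:
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks

def pool_logits(chunk_logits):
    """Aggregate per-chunk label scores by element-wise max, so an ICD
    code predicted anywhere in the document is predicted for the
    document as a whole."""
    return [max(col) for col in zip(*chunk_logits)]
```

Each window is scored independently by the transformer, so memory stays bounded regardless of document length, and only the aggregation step sees the whole document.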
Autoencoding is a popular method for representation learning. Conventional autoencoders adopt a symmetric encode-decode procedure and a simple Euclidean latent space to detect hidden low-dimensional structure in an unsupervised manner. This work introduces a chart autoencoder with an asymmetric encode-decode process that can incorporate additional semi-supervised information such as class labels. Besides an enhanced ability to handle data with complex topological and geometric structure, these models can also successfully distinguish nearby but disjoint manifolds and intersecting manifolds, with only a small amount of supervision. Moreover, the model only requires a lower-complexity encoder, such as a local linear projection. We discuss the theoretical approximation power of such networks, which essentially depends on the intrinsic dimension of the data manifold rather than the dimension of the observations. Our numerical experiments on synthetic and real-world data verify that the proposed model can effectively handle data with nearby but disjoint manifolds of different classes, overlapping manifolds, and manifolds with non-trivial topology.
In this paper, we propose STC-GEF, a novel spatio-temporal cross-platform graph embedding fusion approach for urban traffic flow prediction. We design a spatial embedding module based on graph convolutional networks (GCN) to extract complex spatial features from traffic flow data. Furthermore, to capture the temporal dependencies between traffic flow data across time intervals, we design a temporal embedding module based on recurrent neural networks. Based on the observation that trip data from different transportation platforms (e.g., taxis, Uber, and Lyft) can be correlated, we design an effective fusion mechanism that combines trip data from different transportation platforms and further uses them for cross-platform traffic flow prediction (e.g., using taxi and ride-sharing data for taxi traffic flow prediction). We conduct extensive real-world experimental studies based on real trip data of yellow taxis and a ride-sharing platform (Lyft) in New York City (NYC), and validate the accuracy and effectiveness of STC-GEF in fusing data from different transportation platforms and predicting traffic flows.
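The spatial embedding module builds on standard graph convolution. A single generic GCN propagation step, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), can be sketched as below; this is the textbook GCN layer, not a reproduction of STC-GEF's exact architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step over a region/road graph.

    A: (n, n) adjacency matrix of the traffic graph.
    H: (n, f_in) node features, e.g. flow counts per region.
    W: (f_in, f_out) learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking a few such layers lets each region's embedding aggregate flow information from its graph neighborhood, which is the "complex spatial feature" extraction the abstract refers to.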
In many classification problems, we want a classifier that is robust to a range of non-semantic transformations. For example, a human can identify a dog in a picture regardless of the orientation and pose in which it appears. There is substantial evidence that this kind of invariance can significantly improve the accuracy and generalization of machine learning models. A common technique for teaching a model geometric invariances is to augment the training data with transformed inputs. However, which invariances are desired for a given classification task is not always known. Determining an effective data augmentation policy can require domain expertise or extensive data pre-processing. Recent efforts such as AutoAugment optimize over a parameterized search space of data augmentation policies to automate the augmentation process. While AutoAugment and similar methods achieve state-of-the-art classification accuracy on several common datasets, they are limited to learning a single data augmentation policy. Often, different classes or features call for different geometric invariances. We introduce Dynamic Network Augmentation (DNA), which learns input-conditional augmentation policies. The augmentation parameters in our model are outputs of a neural network and are implicitly learned as the network weights are updated. Our model allows for dynamic augmentation policies and performs well on data with geometric transformations conditional on input features.
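Input-conditional augmentation can be shown in miniature: a "policy head" maps the input's features to an augmentation parameter (here a rotation angle), and that transformation is then applied to the same input. In DNA the policy head is a neural network trained jointly with the classifier; here it is a fixed linear map, purely for illustration.

```python
import numpy as np

def conditional_rotation(x, policy_weights):
    """Apply a rotation whose angle is conditioned on the input itself.

    x:              (2,) point to augment.
    policy_weights: (2,) weights of a toy linear policy head (in DNA
                    this would be a learned neural network).
    """
    theta = float(policy_weights @ x)          # angle depends on the input
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x
```

Because the angle is a function of the input, different inputs (e.g. different classes occupying different regions of feature space) effectively receive different augmentation policies, which is the key departure from a single global policy.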
Deep learning-based diagnostic performance increases with more annotated data, but manual annotation is a bottleneck in most fields. Experts evaluate diagnostic images during clinical routine and write their findings in reports. Automatic annotation based on clinical reports could overcome the manual labelling bottleneck. We hypothesize that dense annotations for detection tasks can be generated from model predictions, guided by the sparse information in these reports. To demonstrate efficacy, we generated clinically significant prostate cancer (csPCa) annotations guided by the number of clinically significant findings in the radiology reports. We included 7,756 prostate MRI examinations, of which 3,050 were manually annotated and 4,706 were automatically annotated. We evaluated the automatic annotation quality on the manually annotated subset: our score extraction correctly identified the number of csPCa lesions for 99.3% of the reports, and our csPCa segmentation model correctly localized 83.8 ± 1.1% of the lesions. We evaluated prostate cancer detection performance on 300 examinations from an external center with histopathology-confirmed ground truth. Augmenting the training set with the automatically labelled examinations improved the patient-based diagnostic area under the receiver operating characteristic curve from 88.1 ± 1.1% to 89.8 ± 1.0% (p = 1.2·10^-4) and improved the lesion-based sensitivity at one false positive per case from 79.2 ± 2.8% to 85.4 ± 1.9% (p < 10^-4), reported as mean ± std over 15 independent runs. This improved performance demonstrates the feasibility of our report-guided automatic annotations. Source code is publicly available at https://github.com/diagnijmegen/report-guiding-annotation. The best csPCa detection algorithm is available at https://grand-challenge.org/algorithms/bpmri-cspca-detection-report-guiding-annotations/.
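The score-extraction step, counting clinically significant findings in free-text reports, can be sketched with a simple pattern match. The function below counts PI-RADS 4/5 mentions as a proxy for the number of significant lesions; the pattern is a hypothetical illustration of the report-guided labelling idea, and real reports need far more robust parsing than this.

```python
import re

def count_significant_lesions(report):
    """Toy score extraction: count PI-RADS 4/5 mentions in a radiology
    report as a proxy for the number of clinically significant lesions.
    Hypothetical pattern, not the paper's extraction rules."""
    return len(re.findall(r"PI-RADS\s*[45]\b", report, flags=re.IGNORECASE))
```

A count recovered this way can then cap or guide how many model-predicted lesion candidates are kept as dense pseudo-annotations for each examination.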
Data-driven models created by machine learning are gaining importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artefacts with better performance and sustainability. However, the limited generalization and black-box nature of these models induce limited explainability and reusability. These drawbacks provide significant barriers that retard adoption in engineering design. To overcome this situation, we propose a component-based approach to create partial component models by machine learning (ML). This component-based approach aligns deep learning with systems engineering (SE). Using the example of energy-efficient building design, we first demonstrate the generalization of the component-based approach by accurately predicting the performance of designs with random structure different from the training data. Second, we illustrate explainability with local sampling, sensitivity information, and rules derived from low-depth decision trees, from an engineering-design perspective. The key to explainability is that the activations at the interfaces between components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that directly integrates information for engineering explainability. The large number of possible configurations of combined components allows the inspection of novel, unseen design cases with understandable data-driven models. Matching parameter ranges of similar probability distributions yields reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to the engineering methods of systems engineering and to domain knowledge.
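The interface idea can be made concrete with a toy composition: each component exposes an interpretable engineering quantity at its boundary, and components combine hierarchically into a building-level prediction. The component names and the transmission-loss formula below are illustrative stand-ins (each function could equally be a learned partial model), not the paper's components.

```python
def wall_heat_loss(area_m2, u_value, delta_t):
    """Component model; its interface quantity is an interpretable
    engineering value: transmission heat loss in watts."""
    return area_m2 * u_value * delta_t

def building_heat_demand(walls, delta_t):
    """Compose wall components into a building-level predictor; the
    summed interface activations stay readable as physical quantities
    at every level of the hierarchy."""
    return sum(wall_heat_loss(a, u, delta_t) for a, u in walls)
```

Swapping any one component for a learned sub-model leaves the interfaces, and hence the engineering interpretability, unchanged, which is the reusability argument of the component-based approach.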